- Yesterday we finished the most basic PyTorch implementation of Regression, so today we'll slow down a bit and walk through the whole Training flow properly
- As usual, let's go back to the three steps of machine learning
- Step one is of course to decide on our Function set, i.e. the Model. The Model determines how we select and use the data, so this step is very important: it shapes the predictions we end up making
- Step two is to measure "how bad" things are with a Loss function, and then update the parameters (here I prefer to call it "optimizing", since updating parameters is exactly about getting better predictions)
- Finally, output the results
- These three steps never change anywhere, so the PyTorch workflow naturally follows them too. Let's jump straight into the implementation~
Implementing optimization in PyTorch
Importing PyTorch algorithms
- PyTorch provides a huge library of common model architectures that we can call directly; given the right initial parameters, we can build the corresponding algorithm quickly and conveniently, so the first step is to import the packages
import torch
import torch.nn as nn
PyTorch loss function & optimizer
# set up loss function as mean squared error
def loss(y, y_predicted):
    return ((y_predicted - y) ** 2).mean()
- If we use PyTorch's built-in loss function instead, it becomes the following; a quick check that both versions agree is shown right after the snippet
# MSE in pytorch
loss = nn.MSELoss()
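- As a quick sanity check (the sample values below are made up just for illustration), nn.MSELoss gives the same number as the manual formula:
import torch
import torch.nn as nn
# hypothetical sample values, only to show that the two versions agree
y = torch.tensor([2.0, 4.0, 6.0])
y_predicted = torch.tensor([2.5, 3.5, 6.0])
manual = ((y_predicted - y) ** 2).mean()          # manual MSE
mse = nn.MSELoss()                                # PyTorch MSE
print(manual.item(), mse(y_predicted, y).item())  # both print 0.1666...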
- We also need to optimize the parameters based on that loss function. PyTorch provides a tool called the optimizer; we just hand it the variables we want to update and it takes care of updating them (a sketch of the update it applies is shown after the snippet)
# set stochastic gradient descent as optimizer
optimizer = torch.optim.SGD([w], lr=learning_rate)
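- For plain SGD, the update that optimizer.step() applies is conceptually the same one we wrote by hand in the earlier plain-Python example. A minimal sketch of that manual update, assuming w.grad has already been filled in by a backward() call:
# what optimizer.step() does conceptually for plain SGD on the registered
# parameter w (assumes w.grad was already computed by a backward pass)
with torch.no_grad():
    w -= learning_rate * w.grad   # w = w - lr * dL/dw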
PyTorch version of the Regression loss function & optimizer
- We've now handed the regression loss function and optimizer over to PyTorch, i.e. both measuring the error and updating the parameters, so let's see what the whole program looks like
import torch
import torch.nn as nn
# f = w * x
# f = 2 * x, we set w as 2
x = torch.tensor([1, 2, 3, 4, 5, 6], dtype=torch.float32)
y = torch.tensor([2, 4, 6, 8, 10, 12], dtype=torch.float32)
# init weight
# note: we want PyTorch to compute and update gradients for the variable w, so requires_grad must be enabled on it
w = torch.tensor(0.0, dtype=torch.float32, requires_grad=True)
# model prediction
def forward(x):
    return w * x
# Training
learning_rate = 0.01
n_iters = 10
print(f'Prediction before training: f(5) = {forward(5): .3f}')
# MSE in pytorch
loss = nn.MSELoss()
# set stochastic gradient descent as optimizer
optimizer = torch.optim.SGD([w], lr=learning_rate)
for epoch in range(n_iters):
    # prediction = forward pass
    y_pred = forward(x)
    # loss
    l = loss(y, y_pred)
    # gradient descent is where we calculate gradients and update parameters,
    # so this part covers both the gradients and the weight update
    # in the plain Python example we had to write the gradient function ourselves
    # gradients = backward pass
    l.backward()  # calculate dl/dw
    # update weights
    optimizer.step()
    # zero gradients
    optimizer.zero_grad()
    if epoch % 1 == 0:
        print(f'epoch {epoch + 1}: w = {w:.3f}, loss = {l:.8f}')
print(f'Prediction after training: f(5) = {forward(5): .3f}')
# Prediction before training: f(5) = 0.000
# epoch 1: w = 0.607, loss = 60.66666794
# epoch 2: w = 1.029, loss = 29.44422913
# epoch 3: w = 1.324, loss = 14.29059505
# epoch 4: w = 1.529, loss = 6.93586159
# epoch 5: w = 1.672, loss = 3.36628079
# epoch 6: w = 1.771, loss = 1.63380527
# epoch 7: w = 1.841, loss = 0.79295844
# epoch 8: w = 1.889, loss = 0.38485837
# epoch 9: w = 1.923, loss = 0.18678898
# epoch 10: w = 1.946, loss = 0.09065709
# Prediction after training: f(5) = 9.731
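- Note that after only 10 epochs w is still 1.946 and f(5) comes out as 9.731 rather than 10; as a rough follow-up (a hypothetical continuation of the script above, not part of the original example), training for more iterations should push w toward 2:
# hypothetical continuation: keep training so w converges toward 2.0
extra_iters = 100
for epoch in range(extra_iters):
    y_pred = forward(x)        # forward pass
    l = loss(y, y_pred)        # MSE
    l.backward()               # accumulate dl/dw into w.grad
    optimizer.step()           # SGD update
    optimizer.zero_grad()      # reset gradients for the next epoch
print(f'w after {extra_iters} more epochs: {w.item():.3f}')  # should be close to 2.000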
Model selection
- Back on Day-15 we wrote the forward pass function ourselves, i.e. the model part
# init weight
w = torch.tensor(0.0, dtype=torch.float32, requires_grad=True)
# model prediction
def forward(x):
    return w * x
- If we use one of PyTorch's built-in models instead, we don't have to write it ourselves, we just call it. We don't even need the weight init anymore, because PyTorch initializes the needed variables for us; both the weight and the bias are created automatically (a quick check of these auto-created parameters is shown after the code below)
- Note that these Models need a few parameters of their own: a layer passes signals from some number of inputs to some number of outputs, so when using a Model pay attention to which parameters it requires
- We also need to pay attention to the data layout: in the current example we have 6 samples with 1 feature each, so the data has to be reshaped accordingly
# change data look
x = torch.tensor([[1], [2], [3], [4], [5], [6]], dtype=torch.float32)
y = torch.tensor([[2], [4], [6], [8], [10], [12]], dtype=torch.float32)
n_samples, n_features = x.shape
print(n_samples, n_features)
input_size = n_features
output_size = n_features
model = nn.Linear(input_size, output_size)
...
# the optimizer now updates the model's parameters
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
for epoch in range(n_iters):
    y_pred = model(x)
    ...
    if epoch % 1 == 0:
        [w, b] = model.parameters()
        print(f'epoch {epoch + 1}: w = {w[0][0]:.3f}, loss = {l:.8f}')
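- As mentioned above, nn.Linear creates and randomly initializes its own weight and bias, so we no longer touch w directly; a quick way to peek at what it created (inspection only, not part of the training loop):
# inspect the parameters nn.Linear initialized for us
print(model.weight)              # shape (output_size, input_size)
print(model.bias)                # shape (output_size,)
print(list(model.parameters()))  # the same tensors the optimizer updates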
Full PyTorch Regression
- Below is the Regression Code after all the updates, for reference
import torch
import torch.nn as nn
# f = w * x + b
# f = 2 * x + 0, we set w as 2, b as 0
x = torch.tensor([[1], [2], [3], [4], [5], [6]], dtype=torch.float32)
y = torch.tensor([[2], [4], [6], [8], [10], [12]], dtype=torch.float32)
x_test = torch.tensor([5], dtype=torch.float32)
n_samples, n_features = x.shape
print(n_samples, n_features)
input_size = n_features
output_size = n_features
model = nn.Linear(input_size, output_size)
# Training
learning_rate = 0.01
n_iters = 10
print(f'Prediction before training: f(5) = {model(x_test).item(): .3f}')
# MSE in pytorch
loss = nn.MSELoss()
# set stochastic gradient descent as optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
for epoch in range(n_iters):
    # prediction = forward pass
    y_pred = model(x)
    # loss
    l = loss(y, y_pred)
    # gradient descent is where we calculate gradients and update parameters,
    # so this part covers both the gradients and the weight update
    # in the plain Python example we had to write the gradient function ourselves
    # gradients = backward pass
    l.backward()  # calculate dl/dw
    # update weights
    optimizer.step()
    # zero gradients
    optimizer.zero_grad()
    if epoch % 1 == 0:
        [w, b] = model.parameters()
        print(f'epoch {epoch + 1}: w = {w[0][0]:.3f}, loss = {l:.8f}')
print(f'Prediction after training: f(5) = {model(x_test).item(): .3f}')
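- Here nn.Linear is used directly as the model; for anything bigger you would normally wrap the layers in a class of your own that inherits from nn.Module. Below is a minimal sketch that behaves exactly like the nn.Linear model above (the class name LinearRegression is just an illustration, not from the original example):
import torch.nn as nn
# a custom model class equivalent to the nn.Linear model used above
class LinearRegression(nn.Module):
    def __init__(self, input_dim, output_dim):
        super().__init__()
        # define the layers here; one linear layer is enough for this regression
        self.lin = nn.Linear(input_dim, output_dim)
    def forward(self, x):
        return self.lin(x)
model = LinearRegression(input_size, output_size)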
Daily recap
- Today we've walked through the entire basic PyTorch Training flow and structure. You can see it matches the core machine learning concepts exactly, so let me stress it again: with solid fundamentals, using a Framework gives you twice the result for half the effort and lets the Framework handle the tedious computation for us
- All of the PyTorch Coding Examples so far come from the YouTube PyTorch Tutorial; that video already explains PyTorch usage very clearly, and my role here is essentially that of a translator. I strongly recommend watching it yourself at least once, it will help
- Tomorrow we'll start practicing building our own Logistic Regression Model, to redo the Logistic Regression exercise from Day-09